Better Runtime Guarantees via Stochastic Domination

Author

  • Benjamin Doerr

Abstract

Apart from a few exceptions, the mathematical runtime analysis of evolutionary algorithms is mostly concerned with expected runtimes. In this work, we argue that stochastic domination is a notion that should be used more frequently in this area. Stochastic domination allows one to formulate much more informative performance guarantees than the expectation alone; it decouples the runtime analysis into the truly algorithmic part of establishing a domination statement and the probability-theoretic part of deriving the desired probabilistic guarantees from it; and it admits simpler and more natural proofs. As particular results, we prove a fitness level theorem which shows that the runtime is dominated by a sum of independent geometric random variables, we prove tail bounds for several classic problems, and we give a short and natural proof of Witt's result that the runtime of any $(\mu,p)$ mutation-based algorithm on any function with a unique optimum is subdominated by the runtime of a variant of the (1+1) evolutionary algorithm on the OneMax function.
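To make the central result concrete: the fitness level theorem states, roughly, that if the search space is partitioned into levels $A_1, \dots, A_m$ of increasing fitness with $A_m$ containing only optima, and the algorithm leaves level $A_i$ in one iteration with probability at least $s_i$, then the runtime $T$ satisfies $T \preceq \sum_{i=1}^{m-1} \mathrm{Geom}(s_i)$, a sum of independent geometric random variables. The following minimal Python sketch (an illustration under standard assumptions, not code from the paper) instantiates this for the (1+1) EA maximizing OneMax, where the usual level bounds are $s_i = (n-i)/(en)$, and empirically checks that sampled runtimes stay below samples from the dominating geometric sum; all function names are hypothetical.

import math
import random

def one_plus_one_ea_runtime(n, rng):
    # (1+1) EA with standard bit mutation (rate 1/n) maximizing OneMax;
    # returns the number of iterations until the all-ones optimum is found.
    x = [rng.random() < 0.5 for _ in range(n)]
    fitness = sum(x)
    t = 0
    while fitness < n:
        t += 1
        y = [(not b) if rng.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        if fy >= fitness:
            x, fitness = y, fy
    return t

def dominating_geometric_sum(n, rng):
    # Sample from the dominating distribution of the fitness level theorem:
    # a sum of independent Geom(s_i) with s_i = (n - i) / (e * n), the
    # standard lower bound on the probability of leaving fitness level i.
    total = 0
    for i in range(n):
        s = (n - i) / (math.e * n)
        u = 1.0 - rng.random()  # uniform in (0, 1]
        # Inverse-transform sampling of a geometric random variable.
        total += max(1, math.ceil(math.log(u) / math.log(1.0 - s)))
    return total

if __name__ == "__main__":
    rng = random.Random(42)
    n, runs = 50, 200
    ea = sorted(one_plus_one_ea_runtime(n, rng) for _ in range(runs))
    dom = sorted(dominating_geometric_sum(n, rng) for _ in range(runs))
    # Domination predicts every quantile of the EA runtime lies below the
    # corresponding quantile of the geometric sum (up to sampling noise).
    for q in (0.5, 0.9, 0.99):
        k = min(int(q * runs), runs - 1)
        print(f"q={q}: EA runtime {ea[k]} <= dominating sum {dom[k]}")

Such a domination statement is stronger than an expectation bound precisely because known concentration results for sums of independent geometric random variables then translate directly into runtime tail bounds, which is the route the paper takes for the classic problems mentioned above.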


Similar Articles

The Kernelized Stochastic Batch Perceptron

We present a novel approach for training kernel Support Vector Machines, establish learning runtime guarantees for our method that are better than those of any other known kernelized SVM optimization approach, and show that our method works well in practice compared to existing alternatives.


Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization

Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled with replacement. In practice, however, sampling without replacement is very common, easier to implement in many cases, and often performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling, under various scenarios, f...


A New Approach to Approximate Completion Time Distribution Function of Stochastic PERT Networks

The classical PERT approach uses the path with the largest expected duration as the critical path to estimate the probability of completing a network by a given deadline. However, in general, such a path is not the most critical path (MCP) and does not have the smallest estimate for the probability of completion time. The main idea of this paper is derived from the domination structure between ...


Without-Replacement Sampling for Stochastic Gradient Methods

Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled with replacement. In contrast, sampling without replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling...


Shampoo: Preconditioned Stochastic Tensor Optimization

Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitively large matrices. We describe and analyze a new structure-aware preconditioning algorithm, called Shampoo, for stochastic optimization over tensor spaces. Shampoo maintains a set of preconditioning matrices, each of which operates ...




Publication year: 2018